#low-latency VLM · 21/08/2025
LFM2-VL: Liquid AI's Ultra-Fast, Open-Weight Vision-Language Models for On-Device Use
Liquid AI unveils LFM2-VL, two open-weight vision-language models optimized for fast, low-latency on-device inference, available in 450M and 1.6B parameter variants and easy to integrate via Hugging Face.
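
Since the models are distributed through Hugging Face, a minimal sketch of loading and prompting one of them with the Transformers library is shown below. The repo id `LiquidAI/LFM2-VL-450M`, the image URL, and the generation settings are assumptions for illustration; consult the official model cards for the exact identifiers and recommended usage.

```python
# Minimal sketch: running an LFM2-VL checkpoint via Hugging Face Transformers.
# Repo id and image URL are placeholders/assumptions -- check the model card.
import requests
import torch
from PIL import Image
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "LiquidAI/LFM2-VL-450M"  # assumed repo id; the 1.6B variant follows the same pattern
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Load an example image (placeholder URL).
image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)

# Build a chat-style prompt with one image and one text turn.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
)

# Generate a short caption and decode it.
output = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```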